Provably Robust Metric Learning
Metric learning is an important family of algorithms for classification and similarity search, but the robustness of learned metrics against small adversarial perturbations is less studied. In this paper, we show that existing metric learning algorithms, which focus on boosting the clean accuracy, can result in metrics that are less robust than the Euclidean distance. To overcome this problem, we propose a novel metric learning algorithm to find a Mahalanobis distance that is robust against adversarial perturbations, and the robustness of the resulting model is certifiable. Experimental results show that the proposed metric learning algorithm improves both certified robust errors and empirical robust errors (errors under adversarial attacks). Furthermore, unlike neural network defenses which usually encounter a trade-off between clean and robust errors, our method does not sacrifice clean errors compared with previous metric learning methods.
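The object being learned here is a Mahalanobis distance, a family that generalizes the Euclidean distance by a positive semi-definite matrix. The sketch below is not the paper's algorithm; it only illustrates, with an assumed hand-picked matrix `M`, how a learned `M` reweights input directions (and how `M = I` recovers the Euclidean baseline).

```python
import numpy as np

def mahalanobis_dist(x, y, M):
    """Mahalanobis distance d_M(x, y) = sqrt((x - y)^T M (x - y)).

    M must be symmetric positive semi-definite.
    With M = I this reduces to the Euclidean distance.
    """
    d = x - y
    return float(np.sqrt(d @ M @ d))

x = np.array([1.0, 2.0])
y = np.array([4.0, 6.0])

# Sanity check: M = I gives the Euclidean distance ||x - y|| = 5.
assert np.isclose(mahalanobis_dist(x, y, np.eye(2)), 5.0)

# A (hypothetical, hand-picked) learned M down-weights the second
# coordinate, so perturbations along that axis move points less:
# sqrt(1*3^2 + 0.25*4^2) = sqrt(13).
M = np.diag([1.0, 0.25])
print(mahalanobis_dist(x, y, M))
```

A metric-learning algorithm chooses `M` from data; the paper's contribution is choosing it so that the resulting nearest-neighbor classifier is certifiably robust to small input perturbations, rather than optimizing clean accuracy alone.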
Clean and robust error on the test set under various adversarial attacks; for example, 28.25(47) stands for 28.25 ± 0.47. We thank the reviewers for their constructive comments. The requested additional experiments are presented above. Gradient scattering is measured as the first-order gradient difference. The architecture of our MNIST models is the same as the ones in the challenge.